Scene Semantic Recognition Based on Modified Fuzzy C-Mean and Maximum Entropy Using Object-to-Object Relations
Authors
Abstract
With advances in machine vision systems (e.g., artificial eyes, unmanned aerial vehicles, surveillance monitoring), scene semantic recognition (SSR) technology has attracted much attention due to related applications such as autonomous driving, tourist navigation, intelligent traffic, and remote sensing. Although tremendous progress has been made in visual interpretation, several challenges remain (i.e., dynamic backgrounds, occlusion, lack of labeled data, and changes in illumination, direction, and size). Therefore, we propose a novel SSR framework that intelligently segments the locations of objects, generates a Bag of Features, and recognizes scenes via Maximum Entropy. First, denoising and smoothing are applied to the data. Second, a modified Fuzzy C-Means is integrated with super-pixels and Random Forest for the segmentation of objects. Third, these segmented objects are used to extract features by concatenating different blobs, multiple orientations, the Fourier transform, and geometrical points, over which an Artificial Neural Network learns patterns. Finally, scene labels are estimated via the Maximum Entropy model. During experimental evaluation, our system achieved remarkable mean accuracy rates of 90.07% on the MSRC dataset, 89.26% on Caltech 101 for object recognition, and 93.53% on Pascal-VOC12, respectively. The system should be applicable to various emerging technologies, such as augmented reality to represent real-world environments for military training and engineering design, as well as entertainment, artificial eyes for visually impaired people, and monitoring to avoid congestion or road accidents.
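The segmentation step builds on Fuzzy C-Means clustering. As a minimal illustrative sketch of the underlying technique (standard FCM over feature vectors, not the paper's modified, super-pixel-integrated variant; all parameter names here are illustrative):

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Standard Fuzzy C-Means on X of shape (n_samples, n_features).

    Returns (centers, U) where U[i, k] is the membership of sample i
    in cluster k, with each row of U summing to 1.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Random initial membership matrix, rows normalized to sum to 1.
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(max_iter):
        Um = U ** m
        # Cluster centers: membership-weighted means of the data.
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Euclidean distance from every sample to every center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)  # avoid division by zero
        # Membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1)).
        U_new = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1))).sum(axis=2)
        if np.linalg.norm(U_new - U) < tol:
            U = U_new
            break
        U = U_new
    return centers, U
```

For image segmentation, `X` would typically hold per-pixel or per-super-pixel features (e.g., color and position), and each element is assigned to the cluster with its highest membership.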
Similar resources
Object Recognition based on Local Steering Kernel and SVM
The proposed method recognizes objects by applying Local Steering Kernels (LSK) as descriptors to image patches. To represent the local properties of the images, patches are extracted where variations occur in an image. To find the interest points, a wavelet-based salient point detector is used. The Local Steering Kernel is then applied to the resultant pixels, in ...
3D Scene and Object Classification Based on Information Complexity of Depth Data
In this paper the problem of 3D scene and object classification from depth data is addressed. In contrast to high-dimensional feature-based representation, the depth data is described in a low dimensional space. In order to remedy the curse of dimensionality problem, the depth data is described by a sparse model over a learned dictionary. Exploiting the algorithmic information theory, a new def...
Nonparametric Object and Scene Recognition
With the advent of the Internet, billions of images are now freely available online and constitute a dense sampling of the visual world. Using a variety of nonparametric methods, we explore this world with the aid of a large data set of 79,302,017 images collected from the Web. Motivated by psychophysical results showing the remarkable tolerance of the human visual system to degradations in ima...
Maximum Entropy and Gaussian Models for Image Object Recognition
The principle of maximum entropy is a powerful framework that can be used to estimate class posterior probabilities for pattern recognition tasks. In this paper, we show how this principle is related to the discriminative training of Gaussian mixture densities using the maximum mutual information criterion. This leads to a relaxation of the constraints on the covariance matrices to be positive ...
Object Recognition Using Appearance-Based Parts and Relations
The recognition of general three-dimensional objects in cluttered scenes is a challenging problem. In particular, the design of a good representation suitable to model large numbers of generic objects that is also robust to occlusion has been a stumbling block in achieving success. In this paper, we propose a representation using appearance-based parts and relations to overcome these problems....
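Both the main paper and the maximum-entropy teaser above rely on estimating class posteriors with a maximum-entropy model. A minimal sketch of that idea, in its common form as multinomial logistic regression trained by gradient descent (hyperparameters `lr` and `epochs` are illustrative, not from either paper):

```python
import numpy as np

def softmax(z):
    """Row-wise softmax, numerically stabilized."""
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_maxent(X, y, n_classes, lr=0.1, epochs=500):
    """Multinomial logistic regression, i.e., a maximum-entropy classifier.

    X: (n, d) features; y: (n,) integer labels in [0, n_classes).
    Returns weights W (d, n_classes) and biases b (n_classes,).
    """
    n, d = X.shape
    W = np.zeros((d, n_classes))
    b = np.zeros(n_classes)
    Y = np.eye(n_classes)[y]           # one-hot labels
    for _ in range(epochs):
        P = softmax(X @ W + b)         # estimated class posteriors
        G = P - Y                      # gradient of the cross-entropy loss
        W -= lr * X.T @ G / n
        b -= lr * G.mean(axis=0)
    return W, b
```

Prediction picks the class with the highest posterior, `np.argmax(softmax(X @ W + b), axis=1)`; in the SSR pipeline the inputs would be the learned Bag-of-Features representations rather than raw coordinates.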
Journal
Journal title: IEEE Access
سال: 2021
ISSN: 2169-3536
DOI: https://doi.org/10.1109/access.2021.3058986